27. Improving Minimax

## Modern Variants of Minimax

Minimax with alpha-beta pruning is useful - especially as a teaching tool - because it provides strong theoretical guarantees about selecting optimal moves, but in most real-world applications we are willing to trade some of those guarantees for better empirical performance. Several variants of minimax change or relax different assumptions in order to search more efficiently.

We will not cover these algorithms in detail (they rely on the same principles you've already seen), but you are encouraged to experiment with the following variants:

  • Negamax - a simplification of minimax for zero-sum games: because max(a, b) = -min(-a, -b), a single recursive function can score every position from the perspective of the side to move (see the first sketch after this list)

  • Principal Variation Search (i.e., NegaScout) - combines negamax with good move ordering: the first move is searched with a full window, and each later move is first checked with a cheap null-window search, re-searching at full width only when the check suggests a better move (i.e., start with a wide cutoff window and shrink it); with good move ordering it outperforms plain alpha-beta pruning (see the sketch after this list)

  • Memory-enhanced Test Driver family (e.g., MTD(f)) - performs a series of searches using only null-window probes while keeping a memory store (transposition table) of visited states (i.e., start with a zero-width window and move it until the bounds converge); empirically shown to be the most efficient minimax search on game trees from real games such as chess (see the driver sketch after this list)

  • Best Node Search - repeatedly guesses and checks bounds on the minimax value of each branch until one branch is proven best; empirically shown to be the most efficient minimax algorithm on random trees
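
To make the first two ideas concrete, here is a minimal sketch of negamax with alpha-beta pruning and of Principal Variation Search. The `game` object and its methods (`legal_moves`, `apply`, `is_terminal`, `evaluate`) are hypothetical stand-ins for whatever game representation you use; `evaluate` is assumed to return an integer score from the perspective of the side to move, which is what makes the null window `(alpha, alpha + 1)` truly empty.

```python
import math

def negamax(state, depth, alpha, beta, game):
    """Negamax: minimax for zero-sum games, scored for the side to move."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    best = -math.inf
    for move in game.legal_moves(state):
        child = game.apply(state, move)
        # max(a, b) == -min(-a, -b): negate the child's score and swap the window.
        score = -negamax(child, depth - 1, -beta, -alpha, game)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # cutoff: the opponent will avoid this line anyway
    return best

def pvs(state, depth, alpha, beta, game):
    """Principal Variation Search: full-window search on the first move only."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    first = True
    for move in game.legal_moves(state):
        child = game.apply(state, move)
        if first:
            score = -pvs(child, depth - 1, -beta, -alpha, game)
            first = False
        else:
            # Null-window probe: cheaply test whether this move can beat alpha.
            score = -pvs(child, depth - 1, -alpha - 1, -alpha, game)
            if alpha < score < beta:
                # The probe failed high: re-search with a full window.
                score = -pvs(child, depth - 1, -beta, -score, game)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # cutoff
    return alpha
```

PVS only pays off when move ordering is good: if the first move searched is usually the best, almost every probe fails low and full-width re-searches are rare.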
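
The MTD(f) driver can be sketched on top of the `negamax` function above (it reuses that function and its `math` import). Without the "memory" part - a transposition table inside the search - the repeated zero-window probes redo work, so this sketch shows the control logic only:

```python
def mtdf(state, depth, game, first_guess=0):
    """MTD(f): converge on the minimax value using only zero-window searches."""
    g = first_guess
    lower, upper = -math.inf, math.inf
    while lower < upper:
        beta = g + 1 if g == lower else g
        # A zero-window search answers one question: is the true value >= beta?
        g = negamax(state, depth, beta - 1, beta, game)
        if g < beta:
            upper = g  # failed low: the value is at most g
        else:
            lower = g  # failed high: the value is at least g
    return g
```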

## Going Beyond Minimax

Adversarial search has been a very active research topic in the past several years. Two directions that move beyond classical minimax are:

  • Monte Carlo Tree Search - builds a search tree from random playouts instead of exhaustive expansion (sketched below)
  • Reinforcement learning - learns a policy that selects moves directly from experience
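
As a starting point for experimentation, here is a minimal sketch of Monte Carlo Tree Search with UCB1 selection (UCT). As before, the `game` interface (`legal_moves`, `apply`, `is_terminal`, `to_move`, `reward`) is a hypothetical placeholder; `reward(state, player)` is assumed to return 1 for a win, 0.5 for a draw, and 0 for a loss from `player`'s point of view.

```python
import math
import random

class Node:
    def __init__(self, state, parent, move, just_moved):
        self.state, self.parent, self.move = state, parent, move
        self.just_moved = just_moved          # player who made `move`
        self.children, self.visits, self.wins = [], 0, 0.0
        self.untried = None                   # moves not yet expanded

def best_child(node, c=1.4):
    # UCB1: average reward plus an exploration bonus for rarely-visited children.
    return max(node.children, key=lambda ch: ch.wins / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(root_state, game, iterations=10_000):
    root = Node(root_state, None, None, None)
    root.untried = list(game.legal_moves(root_state))
    for _ in range(iterations):
        node, state = root, root_state
        # 1. Selection: descend through fully expanded nodes.
        while not node.untried and node.children:
            node = best_child(node)
            state = node.state
        # 2. Expansion: add one unexplored child.
        if node.untried:
            move = node.untried.pop()
            mover = game.to_move(state)
            state = game.apply(state, move)
            node.children.append(Node(state, node, move, mover))
            node = node.children[-1]
            node.untried = list(game.legal_moves(state))
        # 3. Simulation: random playout to a terminal state.
        while not game.is_terminal(state):
            state = game.apply(state, random.choice(list(game.legal_moves(state))))
        # 4. Backpropagation: credit the result to every node on the path.
        while node is not None:
            node.visits += 1
            if node.just_moved is not None:
                node.wins += game.reward(state, node.just_moved)
            node = node.parent
    # The most-visited move is the most robust choice.
    return max(root.children, key=lambda ch: ch.visits).move
```

Unlike minimax, MCTS needs no hand-written evaluation function: random playouts plus win statistics stand in for one, which is why it scales to games where good heuristics are hard to design.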